15 research outputs found

    Answer type validation in question answering systems

    In open-domain question answering systems, many questions expect an answer of an explicit type. For example, the question "Which president succeeded Jacques Chirac?" requires an instance of president as the answer. The method we present in this article aims at verifying that an answer given by a system corresponds to the expected type. This verification is done by combining criteria provided by different methods dedicated to checking the appropriateness between an answer and a type. The first criteria are statistical and compute the rate at which the answer and the type appear together in documents; other criteria rely on named entity recognizers; and the last are based on the use of Wikipedia.
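
    The abstract names three families of criteria without giving their combination rule. The following Python sketch illustrates one plausible way to combine them; the helper names, weights and threshold are illustrative assumptions, not values from the paper.

```python
# A minimal sketch of combining answer-type validation criteria of the three
# kinds named above. Weights and threshold are assumptions for illustration.

def cooccurrence_rate(answer_and_type_hits: int, answer_hits: int) -> float:
    """Rate at which the answer co-occurs with the expected type
    (e.g. "president") in the retrieved documents."""
    return answer_and_type_hits / answer_hits if answer_hits else 0.0

def validate_answer_type(cooc: float, ner_match: bool, wiki_match: bool,
                         threshold: float = 0.5) -> bool:
    """Combine the statistical, NER-based and Wikipedia-based criteria
    into a single accept/reject decision."""
    score = 0.5 * cooc + 0.25 * float(ner_match) + 0.25 * float(wiki_match)
    return score >= threshold

# "Nicolas Sarkozy" co-occurs with "president" in 40 of its 50 documents,
# is tagged with a compatible entity type by the NER, and its Wikipedia
# page mentions the expected type.
print(validate_answer_type(cooccurrence_rate(40, 50), True, True))  # True
```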

    Methods combination and ML-based re-ranking of multiple hypotheses for question-answering systems

    Different question answering systems answer different questions correctly because they are based on different strategies. In order to increase the number of questions that can be answered by a single process, we propose solutions for combining two question answering systems, QAVAL and RITEL. QAVAL selects short passages, annotates them with question terms, and then extracts candidate answers, which are ordered by a machine learning validation process. RITEL develops a multi-level analysis of questions and documents; answers are extracted and ordered according to two strategies: exploiting the redundancy of candidates, and a Bayesian model. To merge the system results, we developed different methods, either merging passages before answer ordering or merging end results. The fusion of end results is realized by voting, by merging, and by a machine learning process on answer characteristics, which leads to an improvement of 19% over the best single system's results.
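
    As an illustration of one of the end-result fusion strategies mentioned above, here is a minimal sketch of rank-weighted voting over the answer lists of two systems. The reciprocal-rank weighting is an assumption for illustration; the paper also evaluates plain merging and ML-based fusion.

```python
# A minimal sketch of end-result fusion by voting: each system votes for its
# candidate answers with a weight that decreases with rank.
from collections import defaultdict

def fuse_by_voting(*ranked_lists: list[str]) -> list[str]:
    scores: defaultdict[str, float] = defaultdict(float)
    for answers in ranked_lists:
        for rank, answer in enumerate(answers, start=1):
            scores[answer] += 1.0 / rank  # higher-ranked answers weigh more
    return sorted(scores, key=scores.get, reverse=True)

qaval = ["Nicolas Sarkozy", "Jacques Chirac", "Francois Fillon"]
ritel = ["Francois Fillon", "Nicolas Sarkozy"]
print(fuse_by_voting(qaval, ritel))
# ['Nicolas Sarkozy', 'Francois Fillon', 'Jacques Chirac']
```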

    Answer type validation in a question answering system

    When searching texts for the answer to a question posed in natural language, many questions expect an answer of a certain type. For example, the question "Which president succeeded Jacques Chirac?" expects an entity of type president as its answer. The method presented in this article verifies that the returned answer is indeed of the expected type. To do so, it follows a machine learning approach using three types of criteria. The first are statistical and compute, among other things, how frequently the answer appears with the type in a set of documents. The second use named entities, and the last the Wikipedia encyclopedia.

    Justification of Answers by Verification of Dependency Relations: The French AVE Task

    This paper presents LIMSI's results in the Answer Validation Exercise (AVE) 2008 for French. We tested two approaches during this campaign: a syntax-based strategy and a machine learning strategy. Results of both approaches are presented and discussed.
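
    The title refers to verifying dependency relations. The sketch below shows the general idea under strong simplifying assumptions: the question, with the candidate answer substituted in, and the supporting snippet are both reduced to sets of dependency triples, and the answer is accepted when the snippet covers the hypothesis triples. A real implementation would obtain the triples from a parser and allow looser matching.

```python
# A minimal sketch of syntax-based answer validation over dependency triples.
# Triples are hand-written here; a real system would produce them with a parser.

Triple = tuple[str, str, str]  # (governor, relation, dependent)

def validate_by_dependencies(hypothesis: set[Triple], snippet: set[Triple]) -> bool:
    """Accept the answer when every hypothesis relation also holds
    in the supporting snippet."""
    return hypothesis <= snippet

# "Which president succeeded Jacques Chirac?" with candidate answer "Sarkozy"
hypothesis = {("succeeded", "nsubj", "Sarkozy"), ("succeeded", "obj", "Chirac")}
snippet = {("succeeded", "nsubj", "Sarkozy"), ("succeeded", "obj", "Chirac"),
           ("succeeded", "obl", "2007")}
print(validate_by_dependencies(hypothesis, snippet))  # True
```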

    Fusion of answers from question answering systems

    The answers returned by several question answering systems come from the application of different strategies, and therefore make it possible to answer different questions. Combining these systems thus aims to increase the total number of questions solved. This article presents the combination of three systems: QAVAL, which relies on an answer validation module, and two versions of the RITEL system, which relies on a multi-level analysis applied to questions and documents. The results are fused in different ways: by merging passages; at the system outputs, by voting or by fusion taking into account the weight or rank of the proposed answers; and by a machine learning mechanism over answer characteristics.

    Towards an automatic validation of answers in Question Answering

    Question answering (QA) aims at retrieving precise information from a large collection of documents. Different techniques can be used to find relevant information, and to compare these techniques, it is important to evaluate QA systems. The objective of an Answer Validation task is thus to judge the correctness of an answer returned by a QA system for a question, according to the text snippet given to support it. We participated in such a task in 2006. In this article, we present our strategy for deciding whether the snippets justify the answers: a strategy based on our own QA system, comparing the answers it returns with the answer to judge. We discuss our results, then point out the difficulties of this task.

    Evaluating the answer of a question answering system and its justification

    Question answering systems provide an answer to a question by extracting it from a set of documents, along with a text passage that justifies it. One can then seek to evaluate whether the answer proposed by a system is correct and justified by the passage. To do so, we relied on the verification of several criteria: the first takes into account the proportion and type of the terms common to the passage and the question; the second, the proximity of these terms to the answer; the third compares the answer under consideration with the one obtained by the FRASQUES question answering system run on the passage to judge; and the last verifies the answer type. These criteria are then combined by a decision tree classifier.
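
    The four criteria and their decision-tree combination lend themselves to a short sketch. The feature values and labels below are made up for illustration; a real setup would derive them from judged (question, answer, passage) triples.

```python
# A minimal sketch of combining the four criteria with a decision tree,
# using scikit-learn. All feature values and labels are illustrative.
from sklearn.tree import DecisionTreeClassifier

# Features per candidate: [proportion of common terms, proximity of those
# terms to the answer, agreement with FRASQUES's answer (0/1),
# answer-type check passed (0/1)]
X = [
    [0.8, 0.9, 1, 1],  # well-supported answer
    [0.2, 0.1, 0, 0],  # poorly supported answer
    [0.6, 0.4, 1, 0],
    [0.1, 0.3, 0, 1],
]
y = [1, 0, 1, 0]  # 1 = correct and justified by the passage

clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(clf.predict([[0.7, 0.8, 1, 1]]))  # [1]
```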

    Lexical validation of answers in question answering

    Question answering (QA) aims at retrieving precise information from a large collection of documents, typically the Web. Different techniques can be used to find relevant information, and to compare these techniques, it is important to evaluate question answering systems. The objective of an Answer Validation task is to estimate the correctness of an answer returned by a QA system for a question, according to the text snippet given to support it. We participated in such a task in 2006. In this article, we present our strategy for deciding whether the snippets justify the answers: we used a strategy based on our own question answering system and compared the answers it returned with the answer to judge. We discuss our results and show possible extensions of our strategy, then point out the difficulties of this task by examining several examples.
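
    The core of the strategy, comparing the answer to judge with the answer our own system extracts, can be sketched as follows. The normalization and inclusion test are illustrative assumptions, not the paper's exact matching rule.

```python
# A minimal sketch of lexical answer comparison: the answer to judge is
# accepted when it matches the answer returned by our own QA system,
# after a simple normalization (assumed here for illustration).
import unicodedata

def normalize(answer: str) -> str:
    """Lowercase, strip accents and surrounding whitespace."""
    text = unicodedata.normalize("NFD", answer.strip().lower())
    return "".join(c for c in text if not unicodedata.combining(c))

def validate(answer_to_judge: str, own_answer: str) -> bool:
    a, b = normalize(answer_to_judge), normalize(own_answer)
    return a == b or a in b or b in a  # exact match or inclusion

print(validate("Nicolas Sarkozy", "  nicolas SARKOZY "))  # True
```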

    Selecting answers to questions from Web documents by a robust validation process

    Question answering (QA) systems aim at finding answers to questions posed in natural language using a collection of documents. When the collection is extracted from the Web, the structure and style of the texts differ markedly from those of newspaper articles. We developed a QA system based on an answer validation process able to handle these Web specificities. A large number of candidate answers are extracted from short passages and then validated according to question and passage characteristics. The validation module is based on a machine learning approach. It takes into account criteria characterizing the relevance of both the passage and the answer at the surface, lexical, syntactic and semantic levels, in order to deal with different types of texts. We present and compare results obtained for factual questions posed on a Web collection and on a newspaper collection, and show that our system outperforms a baseline by up to 48% in MRR.
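
    For reference, the MRR (Mean Reciprocal Rank) figure quoted above averages, over all questions, the reciprocal of the rank of the first correct answer; a minimal sketch:

```python
# A minimal sketch of Mean Reciprocal Rank: each question scores 1/rank of
# its first correct answer, or 0 when the system returns no correct answer.

def mrr(first_correct_ranks: list) -> float:
    """first_correct_ranks[i] is the 1-based rank of the first correct
    answer for question i, or None when all answers were wrong."""
    return sum(1.0 / r for r in first_correct_ranks if r) / len(first_correct_ranks)

print(mrr([1, 2, None, 1]))  # 0.625
```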

    A corpus for studying full answer justification

    Question answering (QA) systems aim at retrieving precise information from a large collection of documents. To be considered reliable by users, a QA system must provide elements that allow the answer to be evaluated. This notion of answer justification can also be useful when developing a QA system, as it gives criteria for selecting correct answers. An answer justification can be found in a sentence, in a passage made of several consecutive sentences, or in several passages of one or several documents. We are therefore interested in pinpointing the information that allows the correctness of an answer to be verified in a candidate passage, as well as the question elements that are missing from this passage. Moreover, the relevant information is often expressed in texts in a form different from that of the question: anaphora, paraphrases, synonyms. In order to better assess the importance of all the phenomena we highlighted, and to put enough examples at QA developers' disposal to study them, we decided to build an annotated corpus.
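
    The following sketch shows what one record of such an annotated corpus could look like. The schema, field names and phenomenon labels are illustrative assumptions, not the paper's actual annotation guide.

```python
# A hypothetical annotation record for the corpus described above;
# field names and phenomenon labels are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class JustificationAnnotation:
    question: str
    answer: str
    passage: str
    supporting_spans: list = field(default_factory=list)   # text verifying the answer
    missing_elements: list = field(default_factory=list)   # question elements absent from the passage
    phenomena: list = field(default_factory=list)          # e.g. "anaphora", "paraphrase", "synonym"

record = JustificationAnnotation(
    question="Which president succeeded Jacques Chirac?",
    answer="Nicolas Sarkozy",
    passage="Sarkozy took office in May 2007, succeeding Chirac.",
    supporting_spans=["took office in May 2007, succeeding Chirac"],
    phenomena=["paraphrase"],  # "took office, succeeding X" ~ "succeeded X"
)
print(record.answer, record.phenomena)
```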